# Cross-lingual Transfer
Mbart Large 50 Finetuned Xlsum Summarization
A version of mBART-large-50, a multilingual sequence-to-sequence model covering 50 languages, fine-tuned for text summarization on the XL-Sum dataset.
Text Generation
Transformers

skripsi-summarization-1234
28.54k
0
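
A minimal usage sketch for this entry, assuming the checkpoint is published under the repository id below and works with the standard Transformers summarization pipeline:

```python
# Hedged sketch: multilingual summarization with an mBART-50 fine-tune.
# The repository id is inferred from this listing and may differ.
from transformers import pipeline

model_id = "skripsi-summarization-1234/mbart-large-50-finetuned-xlsum-summarization"
summarizer = pipeline("summarization", model=model_id)

article = (
    "mBART-50 extends mBART to 50 languages, so a single sequence-to-sequence "
    "model can be fine-tuned once and then summarize news articles written in "
    "any of the covered languages."
)
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```
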
Eriberta Base
Apache-2.0
EriBERTa is a bilingual domain-specific language model pre-trained on a massive corpus of clinical medical texts. It surpasses all previous Spanish-language models in the clinical domain, demonstrating exceptional medical text comprehension and information extraction capabilities.
Large Language Model
Transformers Supports Multiple Languages

HiTZ
728
3
Xlm Roberta Large Ehri Ner All
A multilingual named entity recognition model for Holocaust-related texts, fine-tuned from XLM-RoBERTa; it supports 9 languages and reports an F1 score of 81.5%.
Sequence Labeling
Transformers Supports Multiple Languages

ehri-ner
208
3
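
A hedged sketch of running the model through the token-classification pipeline (the repository id is assumed from the listing):

```python
# Hedged sketch: grouped named-entity predictions from the EHRI NER model.
from transformers import pipeline

model_id = "ehri-ner/xlm-roberta-large-ehri-ner-all"  # assumed repository id
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

text = "Many prisoners were deported from Drancy to Auschwitz in 1942."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```
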
Tinysolar 248m 4k Py
Apache-2.0
An open-source model released under the Apache-2.0 license; the listing does not describe its specific capabilities.
Large Language Model
Transformers

upstage
86
4
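
Since the listing does not document the model's capabilities, the sketch below simply assumes it is a causal language model (the name suggests a 248M-parameter checkpoint trained on Python code) loadable through the text-generation pipeline; the repository id and prompt are illustrative:

```python
# Hedged sketch: plain causal-LM generation; output quality is not guaranteed.
from transformers import pipeline

model_id = "upstage/TinySolar-248m-4k-py"  # assumed repository id
generator = pipeline("text-generation", model=model_id)

prompt = "def fibonacci(n):"  # Python-style prompt, assuming code-oriented training data
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```
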
Umt5 Xl
Apache-2.0
A multilingual text generation model pretrained on the mC4 multilingual corpus, supporting 107 languages
Large Language Model
Transformers Supports Multiple Languages

google
1,049
17
Umt5 Small
Apache-2.0
A unified multilingual T5 model pre-trained on the mC4 multilingual corpus, covering 107 languages
Large Language Model
Transformers Supports Multiple Languages

google
17.35k
23
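
Both umT5 checkpoints above are released as pretrained (not instruction-tuned) models, so they are normally fine-tuned before use; the sketch below only shows loading the smaller checkpoint and probing the T5-style span-corruption interface (repository id assumed):

```python
# Hedged sketch: loading umT5-small and probing it with a T5-style sentinel token.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "google/umt5-small"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# <extra_id_0> follows the T5 sentinel-token convention for masked spans.
inputs = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
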
Distilbert Base Multilingual Cased Sentiments Student
Apache-2.0
This is a multilingual sentiment analysis model trained via zero-shot distillation, supporting sentiment classification in 12 languages.
Text Classification
Transformers Supports Multiple Languages

lxyuan
498.23k
283
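
A minimal sketch of multilingual sentiment classification with this checkpoint (repository id assumed from the listing):

```python
# Hedged sketch: the same classifier applied to two different languages.
from transformers import pipeline

model_id = "lxyuan/distilbert-base-multilingual-cased-sentiments-student"  # assumed id
classifier = pipeline("text-classification", model=model_id)

print(classifier("Este producto es excelente."))    # Spanish review
print(classifier("Ce film était très décevant."))   # French review
```
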
Ru Bart Large
MIT
This is a pruned version of facebook/mbart-large-50, retaining only Russian and English embedding vectors, with the vocabulary reduced from 250k to 25k.
Large Language Model
Transformers Supports Multiple Languages

sn4kebyt3
83
9
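
The pruning described above can be approximated in a few steps: tokenize a Russian/English corpus with the original tokenizer, keep only the token ids that occur, and copy the matching rows of the embedding matrix into a smaller table. The sketch below is a hypothetical illustration of that procedure, not the author's actual script, and it omits rebuilding the tokenizer around the reduced vocabulary:

```python
# Hypothetical sketch of vocabulary/embedding pruning for an mBART-style model.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

base_id = "facebook/mbart-large-50"
tokenizer = MBart50TokenizerFast.from_pretrained(base_id)
model = MBartForConditionalGeneration.from_pretrained(base_id)

# In practice this would be a large Russian + English corpus.
corpus = ["Пример русского текста.", "An example English sentence."]

# 1. Collect the token ids that actually occur, plus all special tokens.
keep = set(tokenizer.all_special_ids)
for text in corpus:
    keep.update(tokenizer(text)["input_ids"])
keep = sorted(keep)

# 2. Copy the corresponding rows of the shared embedding matrix.
old_embeddings = model.get_input_embeddings().weight.data
pruned_rows = old_embeddings[keep].clone()

# 3. Shrink the embedding table (the tied lm_head shrinks with it) and
#    write the selected rows back.
model.resize_token_embeddings(len(keep))
model.get_input_embeddings().weight.data.copy_(pruned_rows)
print(f"kept {len(keep)} of {old_embeddings.size(0)} embedding rows")
```
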
Ernie M Base Mnli Xnli
Apache-2.0
This is a multilingual model supporting 100 languages, specifically designed for natural language inference (NLI) and zero-shot classification tasks. It is based on the Baidu ERNIE-M architecture and fine-tuned on the XNLI and MNLI datasets.
Large Language Model
Transformers Supports Multiple Languages

MoritzLaurer
29
3
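
A hedged sketch of zero-shot classification with this NLI fine-tune (repository id assumed; the example sentence is illustrative):

```python
# Hedged sketch: zero-shot topic classification via the NLI head.
from transformers import pipeline

model_id = "MoritzLaurer/ernie-m-base-mnli-xnli"  # assumed repository id
classifier = pipeline("zero-shot-classification", model=model_id)

result = classifier(
    "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU",
    candidate_labels=["politics", "economy", "entertainment", "environment"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```
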
Multilingual Bert Finetuned Xquad
Apache-2.0
A multilingual Q&A model fine-tuned on the xquad dataset based on bert-base-multilingual-cased
Question Answering System
Transformers

ritwikm
24
0
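
A minimal extractive question-answering sketch, with the repository id assumed from the listing; because both the model and XQuAD are multilingual, the question and context need not be in English:

```python
# Hedged sketch: extractive QA over a Spanish passage.
from transformers import pipeline

model_id = "ritwikm/multilingual-bert-finetuned-xquad"  # assumed repository id
qa = pipeline("question-answering", model=model_id)

result = qa(
    question="¿Dónde vive el oso polar?",
    context="El oso polar vive principalmente en el Ártico, sobre el hielo marino.",
)
print(result["answer"], round(result["score"], 3))
```
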
Persian Xlm Roberta Large
A question-answering model fine-tuned from the multilingual XLM-RoBERTa-large pretrained model on PQuAD, a Persian QA dataset
Question Answering System
Transformers

pedramyazdipoor
77
3
Xlm Roberta Base Finetuned Panx All
MIT
A version of the XLM-RoBERTa-base model fine-tuned on the multilingual PAN-X (xtreme) data, primarily used for sequence labeling tasks, with a reported F1 score of 0.8561.
Large Language Model
Transformers

huangjia
29
0
Xlm Roberta Base Finetuned Panx De
MIT
A German token classification model fine-tuned on the xtreme dataset based on xlm-roberta-base
Sequence Labeling
Transformers

dfsj
15
0
Xlm Roberta Base Finetuned Panx De
MIT
A German token classification model fine-tuned on the xtreme dataset based on XLM-RoBERTa-base
Sequence Labeling
Transformers

davidenam
27
0
Xlm Roberta Base Finetuned Panx De
MIT
A German token classification model fine-tuned on the xtreme dataset based on XLM-RoBERTa-base, designed for named entity recognition tasks.
Sequence Labeling
Transformers

frahman
25
0
Xlm Roberta Base Finetuned Panx De Fr
MIT
A fine-tuned version of the XLM-RoBERTa-base model on German and French datasets, primarily used for named entity recognition tasks.
Large Language Model
Transformers

osanseviero
14
0
Xlm Roberta Base Ft Udpos28 Sv
Apache-2.0
A multilingual POS tagging model based on XLM-RoBERTa, fine-tuned on Universal Dependencies v2.8, supporting POS tagging tasks in Swedish and multiple other languages.
Sequence Labeling
Transformers Other

wietsedv
72
0
Xlmr Formality Classifier
A multilingual text formality classification model based on XLM-RoBERTa, supporting English, French, Italian, and Portuguese
Text Classification
Transformers Supports Multiple Languages

s-nlp
795
11
Xlm Roberta Base
XLM-RoBERTa is a multilingual pre-trained model based on the RoBERTa architecture, supporting 100 languages and suitable for cross-lingual understanding tasks.
Large Language Model
Transformers

kornesh
30
1
Xlm Roberta Base Finetuned Panx De
MIT
This model is a fine-tuned version of xlm-roberta-base on the xtreme dataset for German token classification tasks.
Sequence Labeling
Transformers

osanseviero
14
0
Stsb M Mt Es Distilbert Base Uncased
This is a test model fine-tuned using the Spanish dataset from stsb_multi_mt for semantic text similarity tasks.
Text Embedding Spanish
eduardofv
37
2
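
A hedged sketch of using the checkpoint for semantic similarity with plain Transformers mean pooling (repository id assumed; if the model was trained with sentence-transformers, that library can be used directly instead):

```python
# Hedged sketch: mean-pooled sentence embeddings and cosine similarity.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "eduardofv/stsb-m-mt-es-distilbert-base-uncased"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["El gato duerme en el sofá.", "Un gato está durmiendo sobre el sofá."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, seq_len, dim)

mask = batch["attention_mask"].unsqueeze(-1)          # zero out padding positions
embeddings = (hidden * mask).sum(1) / mask.sum(1)     # mean pooling
print(float(torch.cosine_similarity(embeddings[0], embeddings[1], dim=0)))
```
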
Xlm Roberta Base Finetuned Luganda Finetuned Ner Swahili
This is a named entity recognition model based on XLM-RoBERTa, first fine-tuned on Luganda NER data and then on the Swahili portion of the MasakhaNER dataset.
Sequence Labeling
Transformers Other

mbeukman
17
0
Xlm Roberta Large
XLM-RoBERTa-large is a multilingual pretrained language model based on the RoBERTa architecture, supporting various natural language processing tasks in multiple languages.
Large Language Model
Transformers

kornesh
2,154
0
Distilbert Base Multilingual Cased Sentiment 2
Apache-2.0
This is a multilingual text sentiment analysis model based on DistilBERT, fine-tuned on the amazon_reviews_multi dataset, supporting sentiment classification tasks in multiple languages.
Text Classification
Transformers

philschmid
1,520
4
Xlm Roberta Base Finetuned Marc En
MIT
A multilingual text classification model fine-tuned from XLM-RoBERTa-base on the amazon_reviews_multi dataset
Large Language Model
Transformers

daveccampbell
29
0
Tner Xlm Roberta Base Ontonotes5
A named entity recognition model fine-tuned from XLM-RoBERTa on the OntoNotes 5 dataset, supporting token classification on English text.
Sequence Labeling
Transformers English

asahi417
17.30k
5
Tner Xlm Roberta Large All English
A named entity recognition model fine-tuned from XLM-RoBERTa-large, supporting entity recognition tasks on English text.
Sequence Labeling
Transformers

asahi417
5,023
1
Roberta Large Mnli
RoBERTa-large fine-tuned on the MNLI dataset, commonly used for natural language inference and zero-shot classification tasks.
Large Language Model
Transformers Other

typeform
119
7
Unispeech 1350 En 17h Ky Ft 1h
A speech recognition model based on Microsoft's UniSpeech architecture, specifically fine-tuned for the Kyrgyz language
Speech Recognition
Transformers Other

microsoft
39
1
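
A hedged transcription sketch, assuming the checkpoint is a CTC model usable with the automatic-speech-recognition pipeline; the repository id and audio path are illustrative, and depending on how the model was fine-tuned the output may be phonetic rather than orthographic:

```python
# Hedged sketch: transcribing a Kyrgyz audio clip.
from transformers import pipeline

model_id = "microsoft/unispeech-1350-en-17h-ky-ft-1h"  # assumed repository id
asr = pipeline("automatic-speech-recognition", model=model_id)

# Any 16 kHz mono recording; the file name is a placeholder.
print(asr("kyrgyz_sample.wav")["text"])
```
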
Bert Turkish Text Classification
This is a Turkish text classification model fine-tuned from BERT, capable of classifying Turkish texts into 7 predefined categories.
Text Classification Other
savasy
523
22
Hindi Tpu Electra
A Hindi pre-trained language model based on the ELECTRA architecture, outperforming multilingual BERT on various Hindi NLP tasks
Large Language Model
Transformers Other

monsoon-nlp
25
1
Xlm Roberta Base Ft Udpos28 Cy
Apache-2.0
A multilingual POS tagging model based on XLM-RoBERTa, fine-tuned on Universal Dependencies v2.8 with special optimization for Welsh
Sequence Labeling
Transformers Other

wietsedv
15
0
Srl En Mbert Base
Apache-2.0
A bert-base-multilingual-cased model fine-tuned on English CoNLL-formatted OntoNotes v5.0 data, used for semantic role labeling tasks.
Sequence Labeling Supports Multiple Languages
liaad
93
2
Litlat Bert
LitLat BERT is a trilingual model based on the xlm-roberta-base architecture, focusing on Lithuanian, Latvian, and English performance.
Large Language Model
Transformers Supports Multiple Languages

EMBEDDIA
937
5
Xlm Roberta Large
MIT
XLM-RoBERTa is a multilingual model pretrained on 2.5TB of filtered CommonCrawl data across 100 languages, trained with a masked language modeling objective.
Large Language Model Supports Multiple Languages
FacebookAI
5.3M
431
Finest Bert
FinEst BERT is a trilingual model based on the bert-base architecture, specializing in Finnish, Estonian, and English processing. It outperforms multilingual BERT models while retaining cross-lingual knowledge transfer capabilities.
Large Language Model Supports Multiple Languages
EMBEDDIA
35
2
Xlm Roberta Base
MIT
XLM-RoBERTa is a multilingual model pretrained on 2.5TB of filtered CommonCrawl data across 100 languages, using masked language modeling as the training objective.
Large Language Model Supports Multiple Languages
FacebookAI
9.6M
664
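
Since the base checkpoint is trained with a masked language modeling objective, the most direct way to probe it is the fill-mask pipeline; a minimal sketch:

```python
# Fill-mask with XLM-RoBERTa-base; <mask> is the model's mask token.
from transformers import pipeline

fill = pipeline("fill-mask", model="FacebookAI/xlm-roberta-base")

for prediction in fill("Paris is the <mask> of France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```
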
Ke T5 Base Ko
Apache-2.0
KE-T5 is a Korean-English bilingual text generation model based on the T5 architecture, developed by the Korea Electronics Technology Institute, supporting cross-lingual knowledge transfer for dialogue generation tasks.
Large Language Model Korean
KETI-AIR
208
9
Agri Gpt2
A GPT-2-based general-purpose language model for handling various natural language processing tasks
Large Language Model
Mamatha
15
1